6 research outputs found

    Hybrid Classical/Machine-Learning Force Fields for the Accurate Description of Molecular Condensed-Phase Systems

    Electronic structure methods offer, in principle, accurate predictions of molecular properties; however, their applicability is limited by computational costs. Empirical methods are cheaper but come with inherent approximations and depend on the quality and quantity of training data. The rise of machine learning (ML) force fields (FFs) exacerbates these training-data limitations even further, especially for condensed-phase systems, for which the generation of large, high-quality training datasets is difficult. Here, we propose a hybrid ML/classical FF model that is parametrized exclusively on high-quality ab initio data of dimers and monomers in vacuum but is transferable to condensed-phase systems. The proposed hybrid model combines our previous ML-parametrized classical model with ML corrections for situations where classical approximations break down, thus combining the robustness and efficiency of classical FFs with the flexibility of ML. Extensive validation on benchmarking datasets and experimental condensed-phase data, including organic liquids and small-molecule crystal structures, showcases how the proposed approach may promote FF development and unlock the full potential of classical FFs.
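A minimal sketch of the hybrid idea: a classical energy term everywhere, plus an ML correction that is smoothly switched on at short range, where classical approximations tend to break down. The switching function, cutoff values, and toy energy terms below are illustrative assumptions, not the published model.

```python
import numpy as np

def hybrid_energy(r, e_classical, e_ml_correction, r_switch=4.0, width=0.5):
    """Toy hybrid classical/ML pair energy: the classical term everywhere,
    plus an ML correction smoothly switched on at short range.
    r_switch and width are hypothetical parameters."""
    # s ~ 1 at short range, ~ 0 at long range
    s = 0.5 * (1.0 - np.tanh((r - r_switch) / width))
    return e_classical(r) + s * e_ml_correction(r)

# Toy components: a Coulomb-like classical term, an exponential ML correction
classical = lambda r: -1.0 / r
ml_corr = lambda r: 0.2 * np.exp(-r)

e_short = hybrid_energy(2.0, classical, ml_corr)   # correction active
e_long = hybrid_energy(10.0, classical, ml_corr)   # pure classical limit
```

At long range the switching function vanishes, so the model reduces to the plain classical FF, preserving its robustness; the ML flexibility only enters where it is needed.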

    Energy-Based Clustering: Fast and Robust Clustering of Data with Known Likelihood Functions

    Clustering has become an indispensable tool in the presence of increasingly large and complex data sets. Most clustering algorithms depend, either explicitly or implicitly, on the sampled density. However, estimated densities are fragile due to the curse of dimensionality and finite-sampling effects, for instance in molecular dynamics simulations. To avoid the dependence on estimated densities, an energy-based clustering (EBC) algorithm based on the Metropolis acceptance criterion is developed in this work. In the proposed formulation, EBC can be considered a generalization of spectral clustering in the limit of high temperatures. Taking the potential energy of a sample explicitly into account alleviates requirements regarding the distribution of the data. In addition, it permits the subsampling of densely sampled regions, which can result in significant speed-ups and sublinear scaling. The algorithm is validated on a range of test systems, including molecular dynamics trajectories of alanine dipeptide and the Trp-cage miniprotein. Our results show that including information about the potential-energy surface can largely decouple clustering from the sampling density.
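The role of the Metropolis acceptance criterion can be illustrated with a toy affinity matrix that weights transitions between samples by both spatial proximity and energy change; `metropolis_affinity` and all parameter values below are hypothetical stand-ins, not the published EBC implementation.

```python
import numpy as np

def metropolis_affinity(coords, energies, kT=1.0, sigma=1.0):
    """Toy affinity inspired by a Metropolis acceptance criterion:
    a transition i -> j is weighted by spatial proximity and by
    min(1, exp(-(E_j - E_i) / kT)), so moves into high-energy regions
    are penalized.  Illustrative sketch only."""
    d2 = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
    spatial = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel
    dE = energies[None, :] - energies[:, None]          # E_j - E_i
    accept = np.minimum(1.0, np.exp(-dE / kT))          # Metropolis criterion
    return spatial * accept

# Two low-energy wells separated by a large spatial gap
coords = np.array([[0.0], [0.1], [5.0], [5.1]])
energies = np.array([0.0, 0.1, 0.0, 0.1])
A = metropolis_affinity(coords, energies)
```

Note that the matrix is deliberately asymmetric: downhill moves are always accepted (`accept == 1`), while uphill moves are exponentially suppressed, which is how the potential-energy surface enters the clustering.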

    Machine Learning in QM/MM Molecular Dynamics Simulations of Condensed-Phase Systems

    Quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations have been developed to simulate molecular systems where an explicit description of changes in the electronic structure is necessary. However, QM/MM MD simulations are computationally expensive compared to fully classical simulations, as all valence electrons are treated explicitly and a self-consistent field (SCF) procedure is required. Recently, approaches have been proposed to replace the QM description with machine-learned (ML) models. However, condensed-phase systems pose a challenge for these approaches due to long-range interactions. Here, we establish a workflow which incorporates the MM environment as an element type in a high-dimensional neural network potential (HDNNP). The fitted HDNNP describes the potential-energy surface of the QM particles with an electrostatic embedding scheme. Thus, the MM particles feel a force from the polarized QM particles. To achieve chemical accuracy, we find that even simple systems require models with a strong gradient regularization, a large number of data points, and a substantial number of parameters. To address this issue, we extend our approach to a delta-learning scheme, where the ML model learns the difference between a reference method (DFT) and a cheaper semi-empirical method (DFTB). We show that such a scheme reaches the accuracy of the DFT reference method while requiring significantly fewer parameters. Furthermore, the delta-learning scheme is capable of correctly incorporating long-range interactions within a cutoff of 1.4 nm. It is validated by performing MD simulations of retinoic acid in water and of the interaction between S-adenosylmethionine and cytosine in water. The presented results indicate that delta-learning is a promising approach for (QM)ML/MM MD simulations of condensed-phase systems.
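The delta-learning composition described above amounts to evaluating a cheap baseline (e.g. DFTB) and adding an ML correction trained on the difference E_DFT − E_DFTB. The callables below are toy placeholders standing in for the semi-empirical method and the trained ML model.

```python
import numpy as np

def delta_learning_energy(coords, baseline_energy, ml_correction):
    """Delta-learning sketch: total energy = cheap baseline + ML model
    trained on (reference - baseline).  Both callables are hypothetical
    stand-ins for DFTB and the fitted HDNNP correction."""
    return baseline_energy(coords) + ml_correction(coords)

# Toy stand-ins: a harmonic baseline plus a small anharmonic correction
baseline = lambda x: 0.5 * np.sum(x ** 2)
correction = lambda x: 0.01 * np.sum(x ** 4)

x = np.array([1.0, -1.0])
e = delta_learning_energy(x, baseline, correction)  # 0.5*2 + 0.01*2 = 1.02
```

Because the ML model only has to capture the (typically small and smooth) difference between the two methods, it needs far less capacity than a model that must reproduce the full DFT potential-energy surface from scratch.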

    Regularized by Physics: Graph Neural Network Parametrized Potentials for the Description of Intermolecular Interactions

    Simulations with an explicit description of intermolecular forces using electronic structure methods are still not feasible for many systems of interest. As a result, empirical methods such as force fields (FFs) have become an established tool for the simulation of large and complex molecular systems. However, the parametrization of FFs is time-consuming and has traditionally been based largely on experimental data, which is scarce for many functional groups. Recent years have therefore seen increasing efforts to automate FF parametrization and a move towards FFs fitted against quantum-mechanical reference data. Here, we propose an alternative strategy to parametrize intermolecular interactions, which makes use of machine learning and gradient-descent-based optimization while retaining a functional form founded in physics. This strategy can be viewed as a generalization of existing FF parametrization methods. In the proposed approach, graph neural networks are used in conjunction with automatic differentiation to fit physically motivated models to potential-energy surfaces, enabling full automation and broad applicability in chemical space. As a result, highly accurate FF models are obtained which retain the computational efficiency, interpretability, and robustness of classical FFs. To showcase the potential of the proposed method, both a fixed-charge model and a polarizable model are parametrized for intermolecular interactions and applied to a wide range of systems, including dimer dissociation curves and condensed-phase systems.
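The core idea, fitting the parameters of a physically motivated functional form to reference potential-energy data by gradient descent, can be sketched as follows. A numerical gradient stands in for automatic differentiation, and a synthetic Lennard-Jones curve replaces the quantum-mechanical reference data; the parameter values are illustrative.

```python
import numpy as np

def lennard_jones(r, eps, sigma):
    """Physically motivated pair potential whose parameters are fitted."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# Synthetic "reference" dissociation curve; in the paper this role is
# played by quantum-mechanical dimer energies.
r = np.linspace(1.0, 2.5, 40)
e_ref = lennard_jones(r, 1.0, 1.0)

def loss(p):
    return np.mean((lennard_jones(r, p[0], p[1]) - e_ref) ** 2)

# Gradient descent on [eps, sigma]; central differences stand in for autodiff
p = np.array([0.8, 1.05])
loss0 = loss(p)
lr, h = 1e-3, 1e-6
for _ in range(5000):
    g = np.array([(loss(p + h * np.eye(2)[i]) - loss(p - h * np.eye(2)[i])) / (2 * h)
                  for i in range(2)])
    p -= lr * g
```

In the published approach a graph neural network predicts such parameters from the molecular graph, so that the fit generalizes across chemical space instead of being repeated per molecule.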

    Learning Atomic Multipoles: Prediction of the Electrostatic Potential with Equivariant Graph Neural Networks

    The accurate description of electrostatic interactions remains a challenging problem for classical potential-energy functions. The commonly used fixed partial-charge approximation fails to reproduce the electrostatic potential at short range due to its insensitivity to conformational changes and anisotropic effects. At the same time, possibly more accurate machine-learned (ML) potentials struggle with the long-range behavior due to their inherent locality ansatz. Employing a multipole expansion offers in principle an exact treatment of the electrostatic potential such that the long-range and short-range electrostatic interactions can be treated simultaneously with high accuracy. However, such an expansion requires the calculation of the electron density using computationally expensive quantum-mechanical (QM) methods. Here, we introduce an equivariant graph neural network (GNN) to address this issue. The proposed model predicts atomic multipoles up to the quadrupole, circumventing the need for expensive QM computations. By using an equivariant architecture, the model enforces the correct symmetry by design without relying on local reference frames. The GNN reproduces the electrostatic potential of various systems with high fidelity. Possible uses for such an approach include the separate treatment of long-range interactions in ML potentials, the analysis of electrostatic potential surfaces, and static multipoles in polarizable force fields.
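The multipole expansion that the predicted atomic moments feed into can be sketched as follows. Prefactor conventions for the quadrupole term vary between codes, so the formula below is one common (traceless-quadrupole, atomic-unit) choice, not necessarily the one used in the paper.

```python
import numpy as np

def multipole_esp(point, centers, charges, dipoles, quadrupoles):
    """Electrostatic potential at `point` from atomic monopoles, dipoles,
    and traceless quadrupoles, using
    phi = q/r + mu.r/r^3 + 0.5 * r^T Theta r / r^5  (atomic units).
    Convention for the quadrupole prefactor is an assumption."""
    phi = 0.0
    for R, q, mu, theta in zip(centers, charges, dipoles, quadrupoles):
        d = point - R
        r = np.linalg.norm(d)
        phi += q / r                      # monopole term
        phi += np.dot(mu, d) / r ** 3     # dipole term
        phi += 0.5 * d @ theta @ d / r ** 5  # quadrupole term
    return phi

# Sanity check: a single +1 charge with no higher moments
centers = np.array([[0.0, 0.0, 0.0]])
charges = np.array([1.0])
dipoles = np.zeros((1, 3))
quadrupoles = np.zeros((1, 3, 3))
phi = multipole_esp(np.array([0.0, 0.0, 2.0]),
                    centers, charges, dipoles, quadrupoles)
# phi == 0.5, the Coulomb potential of a unit charge at distance 2
```

In practice the GNN supplies `charges`, `dipoles`, and `quadrupoles` per atom, so the expensive QM electron-density calculation is only needed to generate training data.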